In this file, I will run linear regression analyses.
First, we load necessary libraries.
library(tidyverse) # A whole bunch of packages necessary for data analysis
library(broom) # To effectively clean up model objects
Load data from the previous step.
variables <- readRDS(file = "../data/processed/variables.rds")
annotation <- readRDS(file = "../data/processed/annotation.rds")
data_comb <- readRDS(file = "../data/processed/data_comb.rds")
What kind of model strategy will we use? There are many options.
First of all, what is the outcome of interest?
- End-of-study difference between randomized groups
- Within-group change from baseline
- Between-group change from baseline
Comments:
The correct way to analyze data in a parallel-group randomized clinical trial is to analyze the end-of-study difference between groups, adjusted for necessary covariates AND baseline value.
- Frank Harrell
\(Y = \alpha + \beta_1 T + \beta_2 Y_0\)
Where \(T\) is treatment, \(Y\) is the variable value at end-of-study, \(Y_0\) is the variable value at baseline, \(\alpha\) is the intercept, and \(\beta_1\) and \(\beta_2\) are the regression coefficients for treatment and variable value at baseline, respectively.
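As a minimal sketch of what this model looks like in practice (simulated data, not the trial data; all names here are made up for illustration), the equation above corresponds to an ANCOVA fit with lm():

```r
# Simulated illustration of Y = alpha + beta1*T + beta2*Y0 (not the trial data)
set.seed(1)
n   <- 100
trt <- rep(c(0, 1), each = n / 2)                       # treatment indicator T
y0  <- rnorm(n, mean = 5)                               # baseline value Y0
y   <- 0.5 - 0.3 * trt + 0.8 * y0 + rnorm(n, sd = 0.5)  # end-of-study value Y
fit_sim <- lm(y ~ trt + y0)
coef(fit_sim)  # estimates of alpha, beta1 (treatment effect), beta2
```

The coefficient on `trt` is the covariate-adjusted end-of-study difference between groups, which is exactly the quantity of interest in the quote above.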
Next, we need to decide on the type of model:
Comments:
However, here we will go through ordinary linear regression, as this setup is pretty useful for most studies and situations. The general strategy can be extended to other situations, once we know the basics.
It’s imperative to have a thorough understanding of the model definitions. Draw up DAGs and visualize the connections between the different variables. At the very least, you need to define:
If you use a tool such as DAGitty, then you can get minimal sufficient adjustment set for either the total or direct effect of the exposure on the outcome.
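As a sketch of that workflow (a hypothetical mini-DAG, assuming the dagitty package is installed; the variables and edges are made up for illustration):

```r
library(dagitty)

# Hypothetical DAG: age and gender confound the diet -> ldlc relationship
g <- dagitty("dag {
  diet -> ldlc
  age -> diet ; age -> ldlc
  gender -> diet ; gender -> ldlc
}")

# Minimal sufficient adjustment set for the total effect of diet on ldlc
adjustmentSets(g, exposure = "diet", outcome = "ldlc", effect = "total")
```

Here the only minimal sufficient set is { age, gender }, since both open backdoor paths from diet to ldlc.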
Subset the covariates from baseline. Since there was a slight weight loss in the intervention group, we need to take this into account; therefore, subset the change in weight and add that to the covariate data frame.
data_covariates <- data_comb %>%
filter(time == "base") %>%
select(id, group, age, gender, bmi, hypertensive_med, myocardialinfarction, smoking2, ldlc) %>%
left_join(
data_comb %>% filter(time == "delta") %>% select(id, group, weight),
by = c("id", "group"))
data_covariates
Next, prepare the data frame with baseline and end-of-study values in wide format.
# Baseline Nightingale variables
data_base <- data_comb %>%
filter(time == "base") %>%
select(id, group, variables$nightingale)
# End-of-study Nightingale variables
data_end <- data_comb %>%
filter(time == "end") %>%
select(id, group, variables$nightingale)
# Join to a wide format
data_wide <- left_join(
data_end,
data_base,
by = c("id", "group"),
suffix = c("_end", "_base"))
data_wide
Update the wide data with selected covariates as well.
data_wide <- left_join(data_covariates, data_wide, by = c("id", "group"))
Great, now the data frame is almost ready for modeling.
Prior to modeling, we need to transform data to approximate normality. There are many ways to do this, but one common way is to use a log transformation. There are many other possible transformations, and we will look at a few.
To determine which variables should be transformed, we first calculate the skewness of all baseline and end-of-study variables. To do this, we will use the e1071::skewness function. It returns a value around zero for symmetric distributions: positive values indicate right skew, negative values indicate left skew, and an absolute value above 1 is commonly taken as substantial skewness.
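The sign convention is easy to verify on simulated data (a quick sketch, assuming e1071 is installed):

```r
# Sanity check of the skewness sign convention on simulated data
set.seed(1)
e1071::skewness(rexp(1000))   # right-skewed distribution -> clearly positive
e1071::skewness(-rexp(1000))  # left-skewed distribution  -> clearly negative
e1071::skewness(rnorm(1000))  # symmetric distribution    -> close to zero
```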
To get a grasp of the variation, we do what we always do: we plot the result (in this case as a waterfall plot).
data_wide_skew <- data_wide %>%
select(matches("_base|_end")) %>%
summarise_all(e1071::skewness, na.rm = TRUE) %>%
pivot_longer(cols = everything(), names_to = "variables", values_to = "value")
data_wide_skew %>%
ggplot(aes(fct_reorder(variables, value), value)) +
geom_col(fill = RColorBrewer::brewer.pal(n = 6, name = "Pastel1")[2]) +
geom_hline(yintercept = 1, color = "grey", linetype = "dashed") +
geom_hline(yintercept = -1, color = "grey", linetype = "dashed") +
theme_classic() +
theme(axis.text.x = element_blank(),
axis.ticks.x = element_blank()) +
labs(x = "Variable", y = "Skewness", title = "Skewness for all variables")
We see quite a few variables with skewness above or below the ±1 thresholds mentioned. We will choose a transformation for each of the two classes and apply it to those variables, and then visualize the before-and-after distributions.
Some of the transformations that can be considered are:
Here, I have defined the latter three, as I am not aware of an R package that already provides them.
inverse <- function(x) {1/x}
cuberoot <- function(x) {x^(1/3)}
square <- function(x) {x^2}
Pull out the variables that are skewed.
data_wide_skew_left <- data_wide_skew %>% filter(value < -1) %>% pull("variables")
data_wide_skew_right <- data_wide_skew %>% filter(value > 1) %>% pull("variables")
Plot the left skewed variables before and after transformation. Use a function for efficiency.
plot_histograms <- function(data, before_after = "before", direction = "left") {
data %>%
pivot_longer(everything()) %>%
ggplot(aes(value)) +
geom_histogram() +
facet_wrap(~name, scales = "free") +
labs(x = NULL, y = "Count",
title = paste0("Distributions ", before_after, " transformation: ", direction, "-skewed variables"))
}
data_wide %>%
select(data_wide_skew_left) %>%
plot_histograms()
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
data_wide %>%
select(data_wide_skew_left) %>%
mutate_at(data_wide_skew_left, square) %>%
plot_histograms(before_after = "after")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
data_wide %>%
select(data_wide_skew_right) %>%
plot_histograms(direction = "right")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
data_wide %>%
select(data_wide_skew_right) %>%
mutate_at(data_wide_skew_right, log) %>%
plot_histograms(before_after = "after", direction = "right")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
I’m happy with the way the transformations worked out, and will apply these to the data.
data_wide <- data_wide %>%
mutate_at(data_wide_skew_left, square) %>%
mutate_at(data_wide_skew_right, log)
Let’s run a regression to see if things work.
fit <- lm(`XXL-VLDL-P_end` ~ group + `XXL-VLDL-P_base`, data = data_wide)
# Print model object
fit
##
## Call:
## lm(formula = `XXL-VLDL-P_end` ~ group + `XXL-VLDL-P_base`, data = data_wide)
##
## Coefficients:
## (Intercept) groupintervention `XXL-VLDL-P_base`
## -3.8188 -0.2187 0.8397
# Use the summary function to get a more complete summary
summary(fit)
##
## Call:
## lm(formula = `XXL-VLDL-P_end` ~ group + `XXL-VLDL-P_base`, data = data_wide)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.80588 -0.35931 0.00095 0.24198 1.19650
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -3.8188 2.8544 -1.338 0.187
## groupintervention -0.2187 0.1306 -1.675 0.100
## `XXL-VLDL-P_base` 0.8397 0.1235 6.799 1.49e-08 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.451 on 48 degrees of freedom
## (48 observations deleted due to missingness)
## Multiple R-squared: 0.4916, Adjusted R-squared: 0.4704
## F-statistic: 23.2 on 2 and 48 DF, p-value: 8.91e-08
This worked well. What does the ‘fit’ object contain?
names(fit)
## [1] "coefficients" "residuals" "effects" "rank"
## [5] "fitted.values" "assign" "qr" "df.residual"
## [9] "na.action" "contrasts" "xlevels" "call"
## [13] "terms" "model"
What we normally want is:
For the coefficients, we can use the broom::tidy function. Set conf.int = TRUE to get the 95 % confidence intervals (other confidence levels can be chosen via the conf.level argument).
tidy(fit, conf.int = TRUE)
Let’s try to plot the coefficients.
fit %>%
tidy(conf.int = TRUE) %>%
ggplot(aes(term, estimate, fill = p.value)) +
geom_hline(yintercept = 0, color = "grey60", linetype = "dashed") +
geom_pointrange(aes(ymin = conf.low, ymax = conf.high), shape = 21) +
coord_flip() +
scale_fill_distiller() +
theme_classic() +
theme(legend.position = "bottom")
For model-level statistics, we can use the broom::glance function.
glance(fit)
For observation-level statistics (fitted values, residuals, and so on), we can use the broom::augment function.
augment(fit)
Let’s try to plot the residuals versus fitted values.
fit %>%
augment() %>%
ggplot(aes(.fitted, .resid)) +
geom_hline(yintercept = 0, color = "grey", linetype = "dashed") +
geom_point() +
theme_classic() +
labs(title = "XXL.VLDL.P_end")
# Put this into a function (useful to map over many data and variables)
plot_fitted_resids <- function(data, variable) {
data %>%
ggplot(aes(.fitted, .resid)) +
geom_hline(yintercept = 0, color = "grey", linetype = "dashed") +
geom_point() +
theme_classic() +
labs(title = variable)
}
All these objects are relatively complex, and to run these models effectively for all variables, we need a clever setup (see next section: Many models).
# The end-values
data_wide_end <- data_wide %>% select(ends_with("_end"))
# The base-values
data_wide_base <- data_wide %>% select(ends_with("_base"))
# Map over all pairs
fit_many <- map2(
.x = data_wide_end,
.y = data_wide_base,
~lm(.x ~ group + .y, data = data_wide)
) %>%
# Enframe the list results so it's easier to work with
enframe(name = "variable", value = "fit")
Add new variables by mapping over the ‘fit’ object.
fit_many_clean <- fit_many %>%
mutate(
# Change variable names
variable = str_replace_all(variable, "_end", ""),
# Coefficients
tidy = map(fit, ~tidy(.x, conf.int = TRUE) %>% filter(str_detect(term, "group"))),
# Model-level stats
glance = map(fit, glance),
# Individual level stats
augment = map(fit, augment),
# Plots
fitted_resids = map2(augment, variable, plot_fitted_resids)
)
Pull out the results.
fit_results <- fit_many_clean %>%
select(variable, tidy, glance) %>%
unnest()
fit_results
We have a lot of figures available; here is a random figure.
fit_many_clean$fitted_resids[[13]]
We want to skim through these in an effective manner. Therefore, put all the figures into a single multi-page file, with eight plots per page.
fit_many_clean %>%
pluck("fitted_resids") %>%
gridExtra::marrangeGrob(nrow = 4, ncol = 2) %>%
ggsave(filename = "../results/figures/fitted-resids.pdf", width = 6, height = 9)
By glancing through all the residuals, we get a feeling that the models seem to work pretty well.
Now we have basically completed the first model: the most basic model, with no covariate adjustment. In Ulven SM AJCN 2019, we investigated several models and covariate adjustment levels, some of which were:
Let’s run those too. I’ve compressed the code as much as possible, so as to save space. I have done this simply by creating a function that takes the list of lm models, and returns the results in a nicely organized data frame. Then all I need to do is to pull out the coefficients (estimates, 95 % confidence intervals and P values). Here is the function to do just that.
clean_results <- function(list_of_lm_results) {
list_of_lm_results %>%
enframe(name = "variable", value = "fit") %>%
mutate(
variable = str_replace_all(variable, "_end", ""),
tidy = map(fit, ~tidy(.x, conf.int = TRUE) %>% filter(str_detect(term, "group"))),
glance = map(fit, glance),
augment = map(fit, augment),
fitted_resids = map2(augment, variable, plot_fitted_resids)) %>%
select(-fit)
}
Additionally, let’s also wrap the code that saves the fitted-residual plots to separate files in a function.
save_plots <- function(lm_list_column, adjustment) {
lm_list_column %>%
pluck("fitted_resids") %>%
gridExtra::marrangeGrob(nrow = 4, ncol = 2) %>%
ggsave(filename = paste0("../results/figures/fitted-resids-", adjustment, ".pdf"),
width = 6, height = 9)
}
Now use these functions to get results for the above-mentioned adjustment levels.
# Age and gender
fit_many_ag <- map2(
.x = data_wide %>% select(ends_with("_end")),
.y = data_wide %>% select(ends_with("_base")),
~lm(.x ~ group + .y + age + gender, data = data_wide)) %>%
clean_results()
save_plots(fit_many_ag, "ag")
# Age, gender and weight change
fit_many_agw <- map2(
.x = data_wide %>% select(ends_with("_end")),
.y = data_wide %>% select(ends_with("_base")),
~lm(.x ~ group + .y + age + gender + weight, data = data_wide)) %>%
clean_results()
save_plots(fit_many_agw, "agw")
Finally, we put all coefficients into a single data frame and, for each adjustment level, add FDR correction.
results_group <- list(
"None" = fit_results,
"Age and gender" = fit_many_ag %>% select(variable, tidy, glance) %>% unnest(),
"Age, gender and weight change" = fit_many_agw %>% select(variable, tidy, glance) %>% unnest()
) %>%
bind_rows(.id = "adj") %>%
group_by(adj) %>%
mutate(fdr = p.adjust(p.value, method = "fdr")) %>%
ungroup()
results_group
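A small worked example of what p.adjust(method = "fdr") (Benjamini-Hochberg) does: each sorted P value \(p_{(i)}\) is replaced by \(\min_{j \ge i} p_{(j)} \cdot n / j\), capped at 1.

```r
# Benjamini-Hochberg in miniature: here p_(i) * 4 / i equals 0.04 for every i,
# so all four adjusted q values collapse to 0.04
p <- c(0.01, 0.02, 0.03, 0.04)
p.adjust(p, method = "fdr")
# -> 0.04 0.04 0.04 0.04
```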
Great, now we have most of our models.
Let’s plot the distribution of P values and FDR q values.
results_group %>%
ggplot(aes(p.value)) +
geom_histogram(fill = "red") +
geom_freqpoly(aes(fdr), color = "blue") +
facet_wrap(~adj) +
theme_light()
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
Now, in order to plot these models in forest plots, we will rerun the analyses with all variables normalized to mean = 0 and SD = 1 using the scale function.
data_wide_scaled <- data_wide %>%
mutate_at(vars(matches("_end|_base")), ~ as.numeric(scale(.))) # as.numeric() drops the matrix structure that scale() returns
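A detail worth knowing about scale(): it returns a one-column matrix rather than a plain vector, so scaled data-frame columns become matrix columns unless the call is wrapped in as.numeric(). A minimal base-R sketch:

```r
x <- c(2, 4, 6, 8)
scale(x)                   # a 4 x 1 matrix, with center/scale attributes
z <- as.numeric(scale(x))  # a plain numeric vector with mean ~0, sd 1
mean(z)  # ~0 (up to floating point)
sd(z)    # 1
```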
# No adjustments
fit_many_scaled <- map2(
.x = data_wide_scaled %>% select(ends_with("_end")),
.y = data_wide_scaled %>% select(ends_with("_base")),
~lm(.x ~ group + .y, data = data_wide_scaled)) %>%
clean_results() %>%
select(variable, tidy, glance) %>%
unnest()
# Age and gender
fit_many_scaled_ag <- map2(
.x = data_wide_scaled %>% select(ends_with("_end")),
.y = data_wide_scaled %>% select(ends_with("_base")),
~lm(.x ~ group + .y + age + gender, data = data_wide_scaled)) %>%
clean_results() %>%
select(variable, tidy, glance) %>%
unnest()
# Age, gender and weight change
fit_many_scaled_agw <- map2(
.x = data_wide_scaled %>% select(ends_with("_end")),
.y = data_wide_scaled %>% select(ends_with("_base")),
~lm(.x ~ group + .y + age + gender + weight, data = data_wide_scaled)) %>%
clean_results() %>%
select(variable, tidy, glance) %>%
unnest()
Again, put them all into a common data frame and add FDR; also, cut both P values and FDR q values into commonly used groups.
results_scaled_group <- list(
"None" = fit_many_scaled,
"Age and gender" = fit_many_scaled_ag,
"Age, gender and weight change" = fit_many_scaled_agw
) %>%
bind_rows(.id = "adj") %>%
group_by(adj) %>%
mutate(
fdr = p.adjust(p.value, method = "fdr"),
p.value.group = case_when(
p.value < 0.001 ~ "< 0.001",
p.value < 0.01 ~ "< 0.01",
p.value < 0.05 ~ "< 0.05",
p.value >= 0.05 ~ "\u2265 0.05",
TRUE ~ NA_character_) %>%
factor(levels = c("\u2265 0.05", "< 0.05", "< 0.01", "< 0.001")),
fdr.group = case_when(
fdr < 0.05 ~ "< 0.05",
fdr < 0.10 ~ "< 0.10",
fdr < 0.15 ~ "< 0.15",
fdr >= 0.15 ~ "\u2265 0.15",
TRUE ~ NA_character_) %>%
factor(levels = c("\u2265 0.15", "< 0.15", "< 0.10", "< 0.05"))
) %>%
ungroup()
results_scaled_group
This looks great.
A main point of the NoMa study is that exchanging SFA with PUFA will reduce LDL-C. And indeed, this is what happened in the trial: the intervention arm reduced TC and LDL-C by ~10 %, while there was no change in the control arm. Naturally, we would expect changes in the Nightingale metabolome to associate strongly with changes in LDL-C. One way to test this is to look at the cross-sectional association between LDL-C and the Nightingale metabolites at baseline, and whether these associations are “normalized” by the intervention. To test this hypothesis in a global manner, we can plot a scatterplot of the baseline, cross-sectional \(\beta\) coefficients (x axis) against the intervention-based group-effect \(\beta\) coefficients (y axis).
Therefore, we run a single set of models with LDL-C as the exposure (at baseline), with adjustment for age, gender and BMI.
results_scaled_ldlc <- map(
.x = data_wide_scaled %>% select(ends_with("_base")),
~lm(.x ~ ldlc + age + gender + bmi, data = data_wide_scaled)) %>%
enframe(name = "variable", value = "fit") %>%
mutate(
tidy = map(fit, ~tidy(.x, conf.int = TRUE) %>% filter(str_detect(term, "ldlc"))),
glance = map(fit, glance)) %>%
select(variable, tidy, glance) %>%
unnest() %>%
mutate(variable = str_replace_all(variable, "_base", ""),
adj = "Age, gender and BMI",
fdr = p.adjust(p.value, method = "fdr"))
results_scaled_ldlc
Fantastic.
This concludes the script. Some conclusions and take-homes:
Here we save those files that we will use in downstream work.
# RDS files
saveRDS(results_scaled_group, file = "../data/processed/results_scaled_group.rds")
saveRDS(results_scaled_ldlc, file = "../data/processed/results_scaled_ldlc.rds")
saveRDS(results_group, file = "../data/processed/results_group.rds")
# Excel files
list(
"group" = results_group,
"scaled_group" = results_scaled_group,
"scaled_ldlc" = results_scaled_ldlc
) %>%
openxlsx::write.xlsx(file = "../results/tables/results_models.xlsx", colWidths = "auto")
To improve reproducibility, print out the session info for this script.
devtools::session_info()
## - Session info ----------------------------------------------------------
##
## - Packages --------------------------------------------------------------
## package * version date lib source
## acepack 1.4.1 2016-10-29 [1] CRAN (R 3.6.0)
## assertthat 0.2.1 2019-03-21 [1] CRAN (R 3.6.0)
## backports 1.1.4 2019-04-10 [1] CRAN (R 3.6.0)
## base64enc 0.1-3 2015-07-28 [1] CRAN (R 3.6.0)
## broom * 0.5.2 2019-04-07 [1] CRAN (R 3.6.0)
## callr 3.3.0 2019-07-04 [1] CRAN (R 3.6.1)
## cellranger 1.1.0 2016-07-27 [1] CRAN (R 3.6.0)
## checkmate 1.9.4 2019-07-04 [1] CRAN (R 3.6.1)
## class 7.3-15 2019-01-01 [2] CRAN (R 3.6.0)
## cli 1.1.0 2019-03-19 [1] CRAN (R 3.6.0)
## cluster 2.0.8 2019-04-05 [2] CRAN (R 3.6.0)
## colorspace 1.4-1 2019-03-18 [1] CRAN (R 3.6.0)
## crayon 1.3.4 2017-09-16 [1] CRAN (R 3.6.0)
## data.table 1.12.2 2019-04-07 [1] CRAN (R 3.6.0)
## DBI 1.0.0 2018-05-02 [1] CRAN (R 3.6.0)
## desc 1.2.0 2018-05-01 [1] CRAN (R 3.6.0)
## devtools 2.1.0 2019-07-06 [1] CRAN (R 3.6.1)
## digest 0.6.20 2019-07-04 [1] CRAN (R 3.6.1)
## dplyr * 0.8.2 2019-06-29 [1] CRAN (R 3.6.0)
## e1071 1.7-2 2019-06-05 [1] CRAN (R 3.6.0)
## ellipsis 0.2.0.1 2019-07-02 [1] CRAN (R 3.6.1)
## evaluate 0.14 2019-05-28 [1] CRAN (R 3.6.0)
## fansi 0.4.0 2018-10-05 [1] CRAN (R 3.6.0)
## farver 1.1.0 2018-11-20 [1] CRAN (R 3.6.1)
## forcats * 0.4.0 2019-02-17 [1] CRAN (R 3.6.0)
## foreign 0.8-71 2018-07-20 [2] CRAN (R 3.6.0)
## Formula 1.2-3 2018-05-03 [1] CRAN (R 3.6.0)
## fs 1.3.1 2019-05-06 [1] CRAN (R 3.6.0)
## generics 0.0.2 2018-11-29 [1] CRAN (R 3.6.0)
## ggforce * 0.3.1 2019-08-20 [1] CRAN (R 3.6.1)
## ggplot2 * 3.2.1 2019-08-10 [1] CRAN (R 3.6.1)
## ggrepel 0.8.1 2019-05-07 [1] CRAN (R 3.6.1)
## glue 1.3.1 2019-03-12 [1] CRAN (R 3.6.0)
## gridExtra * 2.3 2017-09-09 [1] CRAN (R 3.6.1)
## gtable 0.3.0 2019-03-25 [1] CRAN (R 3.6.0)
## haven * 2.1.1 2019-07-04 [1] CRAN (R 3.6.1)
## highr 0.8 2019-03-20 [1] CRAN (R 3.6.0)
## Hmisc 4.2-0 2019-01-26 [1] CRAN (R 3.6.0)
## hms 0.5.0 2019-07-09 [1] CRAN (R 3.6.0)
## htmlTable 1.13.1 2019-01-07 [1] CRAN (R 3.6.0)
## htmltools 0.3.6 2017-04-28 [1] CRAN (R 3.6.0)
## htmlwidgets 1.3 2018-09-30 [1] CRAN (R 3.6.0)
## httr 1.4.0 2018-12-11 [1] CRAN (R 3.6.0)
## inline 0.3.15 2018-05-18 [1] CRAN (R 3.6.1)
## jsonlite 1.6 2018-12-07 [1] CRAN (R 3.6.0)
## knitr * 1.23 2019-05-18 [1] CRAN (R 3.6.0)
## labeling 0.3 2014-08-23 [1] CRAN (R 3.6.0)
## labelled 2.2.1 2019-05-26 [1] CRAN (R 3.6.1)
## lattice 0.20-38 2018-11-04 [2] CRAN (R 3.6.0)
## latticeExtra 0.6-28 2016-02-09 [1] CRAN (R 3.6.0)
## lazyeval 0.2.2 2019-03-15 [1] CRAN (R 3.6.0)
## lifecycle 0.1.0 2019-08-01 [1] CRAN (R 3.6.1)
## loo 2.1.0 2019-03-13 [1] CRAN (R 3.6.1)
## lubridate 1.7.4 2018-04-11 [1] CRAN (R 3.6.0)
## magrittr 1.5 2014-11-22 [1] CRAN (R 3.6.0)
## MASS 7.3-51.4 2019-03-31 [2] CRAN (R 3.6.0)
## Matrix 1.2-17 2019-03-22 [2] CRAN (R 3.6.0)
## matrixStats 0.54.0 2018-07-23 [1] CRAN (R 3.6.1)
## memoise 1.1.0 2017-04-21 [1] CRAN (R 3.6.0)
## mitools 2.4 2019-04-26 [1] CRAN (R 3.6.1)
## modelr 0.1.4 2019-02-18 [1] CRAN (R 3.6.0)
## munsell 0.5.0 2018-06-12 [1] CRAN (R 3.6.0)
## nlme 3.1-139 2019-04-09 [2] CRAN (R 3.6.0)
## nnet 7.3-12 2016-02-02 [2] CRAN (R 3.6.0)
## openxlsx * 4.1.2 2019-10-29 [1] CRAN (R 3.6.1)
## packrat 0.5.0 2018-11-14 [1] CRAN (R 3.6.1)
## pheatmap 1.0.12 2019-01-04 [1] CRAN (R 3.6.0)
## pillar 1.4.2 2019-06-29 [1] CRAN (R 3.6.0)
## pkgbuild 1.0.3 2019-03-20 [1] CRAN (R 3.6.0)
## pkgconfig 2.0.2 2018-08-16 [1] CRAN (R 3.6.0)
## pkgload 1.0.2 2018-10-29 [1] CRAN (R 3.6.0)
## plyr 1.8.4 2016-06-08 [1] CRAN (R 3.6.0)
## polyclip 1.10-0 2019-03-14 [1] CRAN (R 3.6.0)
## prettyunits 1.0.2 2015-07-13 [1] CRAN (R 3.6.0)
## processx 3.4.0 2019-07-03 [1] CRAN (R 3.6.1)
## ps 1.3.0 2018-12-21 [1] CRAN (R 3.6.0)
## purrr * 0.3.2 2019-03-15 [1] CRAN (R 3.6.0)
## R6 2.4.0 2019-02-14 [1] CRAN (R 3.6.0)
## RColorBrewer 1.1-2 2014-12-07 [1] CRAN (R 3.6.0)
## Rcpp 1.0.1 2019-03-17 [1] CRAN (R 3.6.0)
## readr * 1.3.1 2018-12-21 [1] CRAN (R 3.6.0)
## readxl * 1.3.1 2019-03-13 [1] CRAN (R 3.6.0)
## remotes 2.1.0 2019-06-24 [1] CRAN (R 3.6.0)
## reshape2 1.4.3 2017-12-11 [1] CRAN (R 3.6.0)
## rlang 0.4.0 2019-06-25 [1] CRAN (R 3.6.0)
## rmarkdown * 1.13 2019-05-22 [1] CRAN (R 3.6.0)
## rpart 4.1-15 2019-04-12 [2] CRAN (R 3.6.0)
## rprojroot 1.3-2 2018-01-03 [1] CRAN (R 3.6.0)
## rstan 2.19.2 2019-07-09 [1] CRAN (R 3.6.1)
## rstudioapi 0.10 2019-03-19 [1] CRAN (R 3.6.0)
## rvest 0.3.4 2019-05-15 [1] CRAN (R 3.6.0)
## scales 1.0.0 2018-08-09 [1] CRAN (R 3.6.0)
## sessioninfo 1.1.1 2018-11-05 [1] CRAN (R 3.6.0)
## StanHeaders 2.19.0 2019-09-07 [1] CRAN (R 3.6.1)
## stringi 1.4.3 2019-03-12 [1] CRAN (R 3.6.0)
## stringr * 1.4.0 2019-02-10 [1] CRAN (R 3.6.0)
## survey 3.36 2019-04-27 [1] CRAN (R 3.6.1)
## survival 2.44-1.1 2019-04-01 [2] CRAN (R 3.6.0)
## tableone 0.10.0 2019-02-17 [1] CRAN (R 3.6.1)
## testthat 2.1.1 2019-04-23 [1] CRAN (R 3.6.1)
## tibble * 2.1.3 2019-06-06 [1] CRAN (R 3.6.0)
## tidyr * 1.0.0 2019-09-11 [1] CRAN (R 3.6.1)
## tidyselect 0.2.5 2018-10-11 [1] CRAN (R 3.6.0)
## tidyverse * 1.2.1 2017-11-14 [1] CRAN (R 3.6.1)
## tinytex 0.14 2019-06-25 [1] CRAN (R 3.6.0)
## tweenr 1.0.1 2018-12-14 [1] CRAN (R 3.6.1)
## usethis 1.5.1 2019-07-04 [1] CRAN (R 3.6.1)
## utf8 1.1.4 2018-05-24 [1] CRAN (R 3.6.0)
## vctrs 0.2.0 2019-07-05 [1] CRAN (R 3.6.1)
## withr 2.1.2 2018-03-15 [1] CRAN (R 3.6.0)
## xfun 0.8 2019-06-25 [1] CRAN (R 3.6.0)
## xml2 1.2.0 2018-01-24 [1] CRAN (R 3.6.0)
## yaml 2.2.0 2018-07-25 [1] CRAN (R 3.6.0)
## zeallot 0.1.0 2018-01-28 [1] CRAN (R 3.6.0)
## zip 2.0.4 2019-09-01 [1] CRAN (R 3.6.1)
## zoo 1.8-6 2019-05-28 [1] CRAN (R 3.6.1)
##
## [1] C:/Users/jacobjc/Documents/R/win-library/3.6
## [2] C:/Program Files/R/R-3.6.0/library